An adaptive incremental LBG for vector quantization

Authors

  • F. Shen
  • O. Hasegawa
Abstract

This study presents a new vector quantization method that generates codewords incrementally. New codewords are inserted in regions of the input vector space where the distortion error is highest, until the desired number of codewords (or a distortion error threshold) is reached. Adoption of an adaptive distance function greatly improves the proposed method's performance. During the incremental process, a removal-insertion technique is used to fine-tune the codebook and make the proposed method independent of initial conditions. The proposed method outperforms some recently published efficient algorithms, such as Enhanced LBG (Patanè & Russo, 2001), on the traditional task: given a fixed number of codewords, find a suitable codebook that minimizes the distortion error. The proposed method can also be used for a task that traditional methods cannot solve: given a fixed distortion error, minimize the number of codewords and find a suitable codebook. Experiments on several image compression problems indicate that the proposed method works well.
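The abstract sketches a growth loop: refine the current codebook, locate the region with the largest distortion, and insert a new codeword there, stopping either at a codeword budget or at a distortion target. The Python sketch below illustrates only that general idea; the function names, the jitter used for insertion, and the stopping rule are assumptions for illustration, and the paper's adaptive distance function and removal-insertion fine-tuning step are not reproduced here.

```python
import numpy as np

def lloyd_update(data, codebook, iters=10):
    """Standard Lloyd (LBG) refinement: assign vectors to the nearest
    codeword, then move each codeword to the centroid of its cell."""
    for _ in range(iters):
        # squared Euclidean distance from every vector to every codeword
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        for k in range(len(codebook)):
            members = data[assign == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook, assign

def grow_codebook(data, max_codewords, distortion_target=None):
    """Illustrative incremental codebook growth: repeatedly insert a new
    codeword inside the cell with the largest accumulated distortion."""
    codebook = data.mean(axis=0, keepdims=True).copy()
    while True:
        codebook, assign = lloyd_update(data, codebook)
        # per-vector squared error and average distortion
        errs = ((data - codebook[assign]) ** 2).sum(axis=1)
        total = errs.sum() / len(data)
        if distortion_target is not None and total <= distortion_target:
            break
        if len(codebook) >= max_codewords:
            break
        # pick the Voronoi cell with the largest summed distortion
        cell_err = np.array([errs[assign == k].sum() for k in range(len(codebook))])
        worst = cell_err.argmax()
        # insert a new codeword near that cell's centroid (small jitter)
        new_cw = codebook[worst] + 1e-3 * data.std(axis=0)
        codebook = np.vstack([codebook, new_cw])
    return codebook
```

For example, `grow_codebook(vectors, max_codewords=256)` grows the codebook up to a size budget, while passing `distortion_target` instead mirrors the second task described in the abstract: stop as soon as the average distortion falls below the threshold, using as few codewords as possible.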


Similar articles

Expansive competitive learning for kernel vector quantization

In this paper we present a necessary and sufficient condition for global optimality of unsupervised Learning Vector Quantization (LVQ) in kernel space. In particular, we generalize the results presented for expansive and competitive learning for vector quantization in Euclidean space to the general case of a kernel-based distance metric. Based on this result, we present a novel kernel LVQ algorithm...
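To make the kernel-space setting concrete, here is a minimal sketch of the kernel-induced distance that kernel competitive learning builds on, under the simplifying assumption that prototypes are kept in input space and only the metric is computed through a kernel; this is an illustration, not the formulation of the cited paper.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def kernel_distance2(x, c, kernel=rbf_kernel):
    """Squared distance between phi(x) and phi(c) in the feature space
    induced by the kernel: k(x,x) - 2 k(x,c) + k(c,c)."""
    return kernel(x, x) - 2.0 * kernel(x, c) + kernel(c, c)

def nearest_prototype(x, prototypes, kernel=rbf_kernel):
    """Winner selection for competitive learning under the kernel metric."""
    d2 = [kernel_distance2(x, c, kernel) for c in prototypes]
    return int(np.argmin(d2))
```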


Centroid neural network adaptive resonance theory for vector quantization

In this paper, a novel unsupervised competitive learning algorithm, called the centroid neural network adaptive resonance theory (CNN-ART) algorithm, is proposed to relieve the dependence on the initial codewords of the codebook that conventional vector quantization algorithms exhibit in lossy image compression. The design of the CNN-ART algorithm is mainly based on the adaptive resonance...
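The defining trait of ART-style learning, which this blurb alludes to, is that codewords are allocated on demand whenever an input fails a vigilance (similarity) test, which is why no carefully chosen initial codebook is needed. The sketch below shows only this generic vigilance rule; it is not the CNN-ART algorithm itself, and the `vigilance` and `lr` parameters are illustrative assumptions.

```python
import numpy as np

def art_style_quantizer(data, vigilance, lr=0.05):
    """Generic ART-style (leader-follower) competitive learning: a new
    codeword is created whenever no existing codeword lies within the
    vigilance radius, so no initial codebook has to be supplied."""
    data = np.asarray(data, dtype=float)
    codebook = [data[0].copy()]
    for x in data[1:]:
        dists = [np.linalg.norm(x - c) for c in codebook]
        winner = int(np.argmin(dists))
        if dists[winner] <= vigilance:
            # resonance: pull the winning codeword toward the input
            codebook[winner] += lr * (x - codebook[winner])
        else:
            # mismatch: allocate a new codeword at the input
            codebook.append(x.copy())
    return np.vstack(codebook)
```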


Learning Tree-Structured Vector Quantization for Image Compression

Kohonen's self-organizing feature map (KSOFM) is an adaptive vector quantization (VQ) scheme with progressive code vector updates. However, the KSOFM approach belongs to unconstrained vector quantization, which suffers from exponential growth of the codebook. In this paper, a learning tree-structured vector quantization (LTSVQ) method is presented to overcome this drawback, based on competitive learning...
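A tree-structured codebook trades a small loss in match quality for an encoding cost that grows with the tree depth rather than with the codebook size. The sketch below is a generic binary TSVQ encoder for illustration; the node layout and the `encode` interface are assumptions, not the LTSVQ training procedure of the cited paper.

```python
import numpy as np

class TSVQNode:
    """Node of a binary tree-structured codebook."""
    def __init__(self, codeword, left=None, right=None):
        self.codeword = np.asarray(codeword, dtype=float)
        self.left = left      # TSVQNode or None
        self.right = right    # TSVQNode or None

def encode(vector, node):
    """Descend the tree, choosing the closer child at each level.
    Returns the path as a bit string plus the leaf codeword."""
    vector = np.asarray(vector, dtype=float)
    bits = []
    while node.left is not None and node.right is not None:
        dl = np.sum((vector - node.left.codeword) ** 2)
        dr = np.sum((vector - node.right.codeword) ** 2)
        if dl <= dr:
            bits.append("0")
            node = node.left
        else:
            bits.append("1")
            node = node.right
    return "".join(bits), node.codeword
```

Encoding then costs O(depth) distance evaluations instead of O(codebook size), at the price of an occasionally suboptimal match, which is the usual TSVQ trade-off.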


Adaptive Multiscale Redistribution for Vector Quantization

Vector quantization is a classical problem that appears in many fields. Unfortunately, the quantization problem is generally nonconvex, and therefore affords many local minima. The main problem is finding an initial approximation which is close to a “good” local minimum. Once such an approximation is found, the Lloyd–Max method may be used to reach a local minimum near it. In recent years, much...
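The Lloyd-Max refinement mentioned here alternates two steps until the distortion stops improving: assign each vector to its nearest codeword, then move each codeword to the centroid of its cell. Below is a minimal sketch for vector data under squared-error distortion, assuming the caller supplies the initial codebook whose quality the paragraph is concerned with.

```python
import numpy as np

def generalized_lloyd(data, codebook, tol=1e-6, max_iters=100):
    """Alternate nearest-codeword assignment and centroid update until
    the average distortion stops improving."""
    codebook = np.asarray(codebook, dtype=float).copy()
    prev = np.inf
    for _ in range(max_iters):
        # squared distances from every vector to every codeword
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        distortion = d2[np.arange(len(data)), assign].mean()
        if prev - distortion < tol:
            break
        prev = distortion
        # move each codeword to the centroid of its Voronoi cell
        for k in range(len(codebook)):
            members = data[assign == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook, distortion
```

As the paragraph notes, the local minimum this iteration converges to depends entirely on the initial codebook, which is why finding a good starting approximation is the hard part.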


Vector Quantization using the Improved Differential Evolution Algorithm for Image Compression

Vector Quantization (VQ) is a popular image compression technique with a simple decoding architecture and a high compression ratio. Codebook design is the most essential part of Vector Quantization. Linde–Buzo–Gray (LBG) is a traditional method for generating the VQ codebook, but it results in a lower PSNR value. The codebook affects the quality of image compression, so the choice of an appropriate codebook...
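For reference, the classical LBG procedure named here initializes with the global centroid and repeatedly doubles the codebook by splitting every codeword into two slightly perturbed copies, refining with Lloyd iterations after each split; PSNR is then the usual quality measure. The sketch below assumes squared-error distortion, a power-of-two target size, and an illustrative perturbation constant.

```python
import numpy as np

def lbg_codebook(data, size, eps=0.01, lloyd_iters=10):
    """Classical LBG: start from the global centroid and repeatedly split
    every codeword into c*(1+eps) and c*(1-eps), refining with Lloyd
    iterations after each split, until `size` codewords exist
    (size is assumed to be a power of two here)."""
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(lloyd_iters):
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            assign = d2.argmin(axis=1)
            for k in range(len(codebook)):
                members = data[assign == k]
                if len(members) > 0:
                    codebook[k] = members.mean(axis=0)
    return codebook

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((np.asarray(original, dtype=float) -
                   np.asarray(reconstructed, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```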



Journal:
  • Neural networks: the official journal of the International Neural Network Society

Volume 19, Issue 5

Pages: -

Published: 2006